KMID : 1147120180240010001
Journal of the Korean Society of Imaging Informatics in Medicine
2018 Volume.24 No. 1 p.1 ~ p.10
Vulnerability of Deep Learning based Computer-Aided Diagnosis: Experimental Adversarial Attack Against CT Lung Nodule Detection Model
Lee Jae-Won, Kim Jong-Hyo
Abstract
Background: Recent developments in deep learning techniques have drawn attention from the medical imaging community with outstanding performance and appear to hold promise for future applications in computer-aided diagnosis. However, concerns remain about the inherent uncertainty in the behavior of deep learning models, which needs to be thoroughly investigated before clinical translation. Adversarial attack is a useful technique for testing deep learning models by exposing them to a set of intentionally perturbed examples and evaluating the resulting performance degradation. This study investigates the vulnerability of deep learning models for basic object classification and CT lung nodule detection tasks.
Materials and Methods: We evaluated the vulnerability of three deep learning models trained on the MNIST, CIFAR-10, and LIDC-IDRI datasets, respectively. Four recent adversarial attack algorithms were employed to generate adversarial examples perturbing the first two deep learning models, and an appropriate attack algorithm was then selected for testing the deep learning model for CT lung nodule detection.
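
The abstract does not name the four attack algorithms. As a minimal sketch of how such an attack generates a perturbed example, the following Python code implements the Fast Gradient Sign Method (FGSM), a common gradient-based attack; the PyTorch framework and the names model, images, labels, and epsilon are assumptions introduced for illustration, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        # Fast Gradient Sign Method: shift each pixel by +/- epsilon along the
        # sign of the loss gradient, then clip back to the valid intensity range.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv_images = images + epsilon * images.grad.sign()
        return adv_images.clamp(0.0, 1.0).detach()

Larger values of epsilon correspond to stronger degrees of perturbation of the kind varied in the CT lung nodule experiment.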

Results: The classification performance of the MNIST-trained deep learning model degraded from 0.98 before attack to 0.70, 0.78, 0.01, and 0.02 after attack by the four different algorithms. The performance of the CIFAR-10-trained model likewise degraded from 0.73 before attack to 0.11, 0.16, 0.13, and 0.02 after attack. The performance of the CT lung nodule detection model degraded gradually with increasing degree of perturbation: AUROC was 0.95 before attack and decreased to 0.915, 0.903, and 0.890 after attack; sensitivity was 0.877 before attack and decreased to 0.854, 0.807, and 0.717 after attack.
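
A hedged sketch of how AUROC and sensitivity could be recomputed at each perturbation level is given below; scikit-learn is assumed, and detection_metrics, the 0.5 threshold, and the score arrays are hypothetical names used only for illustration.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def detection_metrics(y_true, scores, threshold=0.5):
        # AUROC over candidate scores, plus sensitivity at a fixed threshold.
        y_true = np.asarray(y_true)
        preds = (np.asarray(scores) >= threshold).astype(int)
        auroc = roc_auc_score(y_true, scores)
        tp = np.sum((preds == 1) & (y_true == 1))
        fn = np.sum((preds == 0) & (y_true == 1))
        return auroc, tp / (tp + fn)

    # Evaluate the same labelled candidates on clean and perturbed CT inputs,
    # once per perturbation level, to trace the degradation curve.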

Conclusion: Deep learning models were vulnerable to perturbed input and showed varying degrees of performance degradation under different attacks. Deep learning models for CAD systems need to be verified with respect to their vulnerability to perturbation and adversarial attacks.
Keywords
Deep Learning, Computer-Aided Detection and Diagnosis (CAD), Threats, Vulnerability, Adversarial Attack